.require "memo.pub[let,jmc]" source;
.cb AN UNREASONABLE BOOK
.skip 2
.begin indent 0;
Joseph Weizenbaum, %2Computer Power and Human Reason%1, W. H. Freeman Co.,
San Francisco 1975
.end;
This moralistic and incoherent book uses computer science and
technology as an illustration to support the view promoted by Lewis
Mumford, Theodore Roszak, and Jacques Ellul, that science has led to
an "immoral" view of man and the world. It also owes much in
rhetorical style and political presupposition to the new left.
I am frightened by its arguments that certain research should not be
done if it is based on or might result in an "obscene" picture of the
world and man. Worse yet, the notion of "obscenity" is vague enough
to admit arbitrary interpretations by activist bureaucrats.
.cb IT'S HARD TO FIGURE OUT WHAT HE REALLY BELIEVES ...
Weizenbaum's style involves making extreme statements which
are later qualified by contradictory statements. Therefore, almost
any quotation is out of context, so it is difficult to
summarize his contentions accurately.
The following passages illustrate the difficulty:
%2"In 1935, Michael Polanyi"%1, [British chemist and
philosopher of science, was told by] %2"Nicolai Bukharin, one of the
leading theoreticians of the Russian Communist party, ... [that]
'under socialism the conception of science pursued for its own sake
would disappear, for the interests of scientists would spontaneously
turn to the problems of the current Five Year Plan.' Polanyi sensed
then that 'the scientific outlook appeared to have produced a
mechanical conception of man and history in which there was no place
for science itself.' And further that 'this conception denied
altogether any intrinsic power to thought and thus denied any grounds
for claiming freedom of thought.'"%1 - from page 1. Well, that's
clear enough; Weizenbaum favors freedom of thought and science and is
worried about threats to them. But on page 265, we have
%2"Scientists who continue to prattle on about 'knowledge for its own
sake' in order to exploit that slogan for their self-serving ends
have detached science and knowledge from any contact with the real
world".%1 Here Weizenbaum seems to be against pure science, i.e.
research motivated solely by curiosity. We also have
%2"With few exceptions, there have been no results, from over twenty
years of artificial intelligence research, that have found their way
into industry generally or into the computer industry in
particular"%1. - page 229 This again suggests that industrial
results are necessary to validate science.
%2"Science promised man power. But as so often happens when people
are seduced by promises of power ... the price actually paid is
servitude and impotence"%1.
%2"Not only has our unbounded feeding on science caused us to become
dependent on it, but, as happens with many other drugs taken in
increasing dosages, science has been gradually converted into a slow
acting poison".%1 - page 13. These are qualified by
%2"I argue for the rational use of science and technology, not for
its mystification, let alone its abandonment"%1. - page 256
In reference to the proposal for a moratorium on certain experiments
with recombinant DNA because they might be dangerous, we
have %2"Theirs is certainly a step in the right direction, and their
initiative is to be applauded. Still, one may ask, why do they feel
they have to give a reason for what they recommend at all? Is not
the overriding obligation on men, including men of science, to exempt
life itself from the madness of treating everything as an object, a
sufficient reason, and one that does not even have to be spoken? Why
does it have to be explained? It would appear that even the noblest
acts of the most well-meaning people are poisoned by the corrosive
climate of values of our time."%1 Is Weizenbaum against all
experimental biology or even all experiments with DNA? I would
hesitate to conclude so from this quote; he may say the direct
opposite somewhere else.
%2"Those who know who and what they are do not need to ask what they
should do."%1 - page 273. Let me assure the reader that there is
nothing in the book that offers any way to interpret this
pomposity. The menace of such grandiloquent precepts is that they
require a priesthood to apply them to particular cases, and volunteer
priests quickly crystallize around any potential center of power.
%2"An individual is dehumanized whenever he is treated as less than a
whole person"%1. - page 266. This is also subject to priestly
interpretation as in the encounter group movement.
.CB BUT HERE'S A TRY AT SUMMARIZING:
.item←0;
As these inconsistent passages show, it isn't easy to
determine Weizenbaum's position, but the following seem to be the
book's main points:
.bb "#. Computers cannot be made to reason as powerfully as humans."
This is supported by quoting overoptimistic predictions by computer
scientists and giving examples of non-verbal human communication.
However, Weizenbaum doesn't name any specific task that computers
cannot carry out, because he wishes %2"to avoid the unnecessary,
interminable, and ultimately sterile exercise of making a catalogue
of what computers will and will not be able to do, either here and
now or ever."%1
.bb "#. There are tasks that computers should not be programmed to do."
Some are tasks Weizenbaum thinks shouldn't be done at all -
mostly for new left reasons. One may quarrel with his politics, and
I do, but obviously computers shouldn't do what shouldn't be done.
However, Weizenbaum also objects to computer hookups to animal brains
and computer conducted psychiatric interviews. As to the former, I
couldn't tell whether he is an anti-vivisectionist, but he seems to
have additional reasons for calling them "obscene". The objection to
computers doing psychiatric interviews also has a component beyond
the conviction that they would necessarily do it badly. Thus he
says, %2"What can the psychiatrist's image of his patient be when he
sees himself, as a therapist, not as an engaged human being acting as
a healer, but as an information processor following rules, etc.?"%1
This seems like the Renaissance-era religious objections to
dissecting the human body that arose when science revived. Even
the Popes eventually convinced themselves that regarding the body as
a machine for scientific or medical purposes was quite compatible
with regarding it as the temple of the soul. Recently they have
taken the same view of studying mental mechanisms for scientific or
psychiatric purposes.
.bb "#. Science has led people to a wrong view of the world and of life."
The view is characterized as mechanistic, and the example of
clockwork is given. (It seems strange for a computer scientist to
give this example, because the advance of the computer model over
older mechanistic models is that computers can and clockwork can't
make decisions.) Apparently, analyzing a living system as composed
of interacting parts rather than treating it as an unanalyzed whole
is bad.
.bb "#. Science is not the sole or even main source of reliable knowledge."
However, he doesn't
propose any other sources of knowledge or say what the limits of
scientific knowledge are, except to characterize certain thoughts as
"obscene".
.bb "#. Certain people and institutions are bad."
These include the
Department of "Defense" (sic), %2Psychology Today%1, the %2New York
Times%1 Data Bank, compulsive computer programmers, Kenneth Colby,
Marvin Minsky, Roger Schank, Allen Newell, Herbert Simon, J.W.
Forrester, Edward Fredkin, B.F. Skinner, and, at one point of his
life, Warren McCulloch.
.cb THE ELIZA EXAMPLE
Perhaps the most interesting part of the book is the account
of his own program ELIZA that parodies Rogerian non-directive
psychotherapy and his anecdotal account of how some people ascribe
intelligence and personality to it. In my opinion, it is quite
natural for people who don't understand the notion of algorithm to
imagine that a computer computes analogously to the way a human
reasons. This leads to the idea that accurate computation entails
correct reasoning and even to the idea that computer malfunctions are
analogous to human neuroses and psychoses. Actually, programming a
computer to draw interesting conclusions from premises is very
difficult and only limited success has been attained. However, the
effect of these natural misconceptions shouldn't be exaggerated;
people readily understand the truth when it is explained, especially
when it applies to a matter that concerns them. In particular, when
an executive excuses a mistake by saying that he placed excessive
faith in a computer, a certain skepticism is called for.
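To make concrete what sort of algorithm is being anthropomorphized,
here is a minimal sketch of ELIZA-style keyword matching - my own
illustration, with invented rules far cruder than Weizenbaum's
original script:
.begin verbatim
# A minimal sketch of ELIZA-style keyword-and-template matching.
# The rules are invented for illustration; the 1966 program used a
# much larger script, but the principle is the same.
import random
import re

RULES = [
    (r'.*\bmy (mother|father)\b.*',
     ["Tell me more about your {0}.",
      "How do you feel about your {0}?"]),
    (r'.*\bI am (.*)',
     ["Why do you say you are {0}?",
      "How long have you been {0}?"]),
    (r'.*\bcomputer\b.*',
     ["Do computers worry you?"]),
]
DEFAULT = ["Please go on.", "I see."]

def respond(line):
    for pattern, templates in RULES:
        m = re.match(pattern, line, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am unhappy about my mother"))
.end
No reasoning occurs anywhere in such a program; whatever insight the
user perceives, the user supplies.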
Colby's (1973) study is interesting in this connection, but
the interpretation below is mine. Colby had psychiatrists interview
patients over a teletype line and also had them interview his PARRY
program that simulates a paranoid. Other psychiatrists were asked to
decide from the transcripts whether the interview was with a man or
with a program, and they did no better than chance. However, since
PARRY is incapable of the simplest causal reasoning, if you ask, "How
do you know the people following you are Mafia?" and get a reply that
they look like Italians, this must be a man, not PARRY. Curiously, it
is easier to imitate (well enough to fool a psychiatrist) the
emotional side of a man than his intellectual side. Probably the
subjects expected the machine to have more logical ability, and this
expectation contributed to their mistakes. Alas, random selection
from the directory of the Association for Computing Machinery did no
better.
It seems to me that ELIZA and PARRY show only that people,
including psychiatrists, often have to draw conclusions on slight
evidence, and are therefore easily fooled. If I am right, two
sentences of instruction would allow them to do better.
In his 1966 paper on ELIZA (incorrectly cited as 1965),
Weizenbaum writes,
%2"One goal for an augmented ELIZA program is thus a system which
already has access to a store of information about some aspect of the
real world and which, by means of conversational interaction with
people, can reveal both what it knows, i.e. behave as an information
retrieval system, and where its knowledge ends and needs to be
augmented. Hopefully the augmentation of its knowledge will also be
a direct consequence of its conversational experience. It is
precisely the prospect that such a program will converse with many
people and learn something from each of them which leads to the hope
that it will prove an interesting and even useful conversational
partner."%1 Too bad he didn't successfully pursue this goal; no-one
else has done so either. I think success would have required a better
understanding of formalization than is exhibited in the book.
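To show the flavor of that goal, here is a toy sketch of my own
devising - not Weizenbaum's design - of a program that answers from
a store of facts, notices where its knowledge ends, and grows the
store from the conversation itself:
.begin verbatim
# A toy "augmented ELIZA" in the sense of the passage quoted above:
# retrieval from a fact store, plus learning from the conversation.
# The starting facts and sentence patterns are invented examples.
facts = {"sky": "blue"}

def converse(line):
    words = line.strip().rstrip("?").split()
    if len(words) >= 3 and words[0] == "what" and words[1] == "is":
        key = words[2]
        if key in facts:
            return facts[key]                         # retrieval
        return "I don't know; what is " + key + "?"   # knowledge ends
    if len(words) >= 3 and words[1] == "is":
        facts[words[0]] = " ".join(words[2:])         # learn from user
        return "Thank you."
    return "Please go on."

print(converse("what is grass?"))   # I don't know; what is grass?
print(converse("grass is green"))   # Thank you.
print(converse("what is grass?"))   # green
.end
Even this trivial version hints at why the goal is hard: nothing in
it distinguishes a fact worth storing from a falsehood or a joke,
and that is a problem of formalization, not of programming effort.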
.item←0;
.CB WHAT DOES HE SAY ABOUT COMPUTERS?
While Weizenbaum's main conclusions concern science in
general and are moralistic in character, some of his remarks about
computer science and AI are worthy of comment.
#. He concludes that since a computer cannot have the
experience of a man, it cannot understand a man. There are three
points to be made in reply. First, humans share each other's
experiences and those of machines or animals only to a limited
extent. In particular, men and women have different experiences.
Nevertheless, it is common in literature for a good writer to
show greater understanding of the experience of the opposite sex
than a poorer writer of that sex. Second, the notion of experience is
poorly understood; if we understood it better, we could reason about
whether a machine could have a simulated or vicarious experience
normally confined to humans. Third, what we mean by understanding is
poorly understood, so we don't yet know how to define whether a
machine understands something or not.
#. Like his predecessor critics of artificial intelligence,
Taube, Dreyfus and Lighthill,
Weizenbaum is impatient, concluding that if the problem hasn't been
solved in twenty years, we should give up. Genetics took about a
century to go from Mendel to the genetic code for proteins, and still
has a long way to go before we fully understand the genetics
and evolution of intelligence and behavior. Artificial intelligence
may be just as difficult. My current answer to the question of when
machines will reach human-level intelligence is that a precise
calculation shows that we are between 1.7 and 3.1 Einsteins and .3
Manhattan Projects away from the goal. However, the current research
is producing the information on which the Einstein will base himself
and is producing useful capabilities all the time.
#. The book nowhere distinguishes between formalizability and
computer simulatability. Formalizations in logic can allow answering
questions that cannot be answered by simulation, e.g. "would it ever
do so-and-so". Reaoning in a formal system can often use partial
information insufficient for simulation to answer such questions. In
fact, a simulation program for a phenomenon is only one of many
possible formalizations of the phenomenon. This review isn't the
place for a full explanation of the relations between these concepts.
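Still, a toy example of my own may make the distinction vivid.
Consider a system that repeatedly executes x := x + 2, and the
question "would x ever be odd?":
.begin verbatim
# Simulation: needs a concrete initial value and reports only on the
# finitely many steps it actually runs.
def simulate(x0, steps):
    x = x0
    for _ in range(steps):
        if x % 2 == 1:
            return "became odd"
        x = x + 2
    return "never odd in %d steps" % steps

# Formal reasoning: from the partial information "x0 is even" we can
# answer the question for ALL even initial values and ALL times, by
# checking that evenness is an invariant of the step x -> x + 2.
def even_is_invariant():
    return all((p + 2) % 2 == p for p in (0, 1))  # parity preserved

print(simulate(4, 1000))      # answers for x0 = 4 and 1000 steps only
print(even_is_invariant())    # answers for every even x0, forever
.end
The simulation is one formalization of the system; the invariant
argument is another, and it answers a question simulation cannot.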
.cb IN DEFENSE OF THE UNJUSTLY ATTACKED - SOME OF WHOM ARE INNOCENT
Here are defenses of Weizenbaum's targets. They are not
guaranteed to entirely suit the defendees.
Weizenbaum's conjecture that the Defense Department supports
speech recognition research in order to be able to snoop on telephone
conversations is biased, baseless, false, and seems motivated by
malice. The committee of scientists that proposed the project
advanced quite different considerations, and the high officials who
made the final decisions are not ogres. Anyway, their other
responsibilities leave them no time for complicated and devious
considerations. I put this one first, because I think the failure of
many scientists to defend the Defense Department against attacks they
know are unjustified is unjust in itself, and furthermore has harmed
the country.
Referring to %2Psychology Today%1 as a cafeteria is clearly
intended to appeal to the snobbery of those who would like to
consider their psychological knowledge to be above the popular level.
So far as I know, professional and academic psychologists welcome the
opportunity offered by %2Psychology Today%1 to explain their ideas to
a wide public. They might even buy a cut-down version of
Weizenbaum's book if he asks them nicely. Hmm, they might even buy
this review. Actually, this attack was just a passing phrase, but
it's still immoral.
Weizenbaum has invented a %2New York Times Data Bank%1
different from the one operated by the %2New York Times%1 - and
possibly better. The real one stores abstracts written by humans and
doesn't use the tapes intended for typesetting machines. As a result
the user has access only to abstracts and cannot search on features
of the stories themselves, i.e. he is at the mercy of what the
abstractors thought was important at the time.
Using computer programs as psychotherapists, as Colby
proposed, would be moral if it would cure people. Unfortunately,
computer science isn't up to it, and maybe the psychiatrists aren't
either.
I agree with Minsky in criticizing the reluctance of art
theorists to develop formal theories. George Birkhoff's formal
theory was probably wrong, but he shouldn't have been criticized for
trying. The problem seems very difficult to me, and I have made no
significant progress in responding to a challenge from Arthur
Koestler to tell how a computer program might make or even recognize
jokes. Perhaps some reader of this review might have more success.
There is a whole chapter attacking "compulsive computer
programmers" or "hackers". This mythical beast lives in the computer
laboratory, is an expert on all the ins and outs of the time-sharing
system, elaborates the time-sharing system with arcane features that
he never documents, and is always changing the system before he even
fixes the bugs in the previous version. All these vices exist, but I
can't think of any individual who combines them, and people generally
outgrow them. As a laboratory director, I have to protect the
interests of people who program only part time against tendencies to
over-complicate the facilities. People who spend all their time
programming and who exchange information by word of mouth sometimes
have to be pressed to make proper writeups. The other side of the
issue is that we professors of computer science sometimes lose our
ability to write actual computer programs through lack of practice
and envy younger people who can spend full time in the laboratory.
The phenomenon is well known in other sciences and in other human
activities.
Sorry about that, but here comes another embedded quotation -
Weizenbaum quoting Roger Schank, %2"What is contributed when it is
asserted that 'there exists a conceptual base that is interlingual,
onto which linguistic structures in a given language map during the
understanding process and out of which such structures are created
during generation [of linguistic utterances]'? Nothing at all. For
the term 'conceptual base' could perfectly well be replaced by the
word 'something'. And who could argue with that so-transformed
statement?"%1 Weizenbaum goes on to say that the real scientific
problem "remains as untouched as ever". On the next page he says
that unless the "Schank-like scheme" understood the sentence %2"Will
you come to dinner with me this evening?"%1 to mean %2"a shy young man's
desperate longing for love%1, then the sense in which the system
"understands" is "about as weak as the sense in which ELIZA
"understood"". This good example raises interesting issues and
seems to call for some distinctions. Full understanding of the
sentence indeed results in knowing about the young man's desire for
love, but it would seem that there is a useful lesser level of
understanding in which the machine would know only that he would like
her to come to dinner.
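A sketch of my own - loosely in the spirit of Schank's conceptual
dependency, with invented slot names - shows how little that lesser
level of understanding needs to contain:
.begin verbatim
# The shallow reading captures only the request itself; the deeper
# reading Weizenbaum demands would add motives no parser can supply
# from the sentence alone.
shallow = {
    "act":   "request",
    "actor": "speaker",
    "event": {"type":  "come-to-dinner",
              "agent": "hearer",
              "with":  "speaker",
              "time":  "this evening"},
}
deep = dict(shallow, motive="longing-for-love")  # hypothetical slot

print(shallow["event"]["time"])   # enough to answer "when?"
.end
A program holding only the shallow structure can already say who is
invited and when, which is useful understanding even if weak.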
Contrast Weizenbaum's demanding,
more-human-than-thou attitude to Schank and Winograd
with his respectful and even obsequious attitude to Chomsky. We have
%2"The linguist's first task is therefore to write grammars, that is,
sets of rules, of particular languages, grammars capable of
characterizing all and only the grammatically admissible sentences of
those languages, and then to postulate principles from which crucial
features of all such grammars can be deduced. That set of principles
would then constitute a universal grammar. Chomsky's hypothesis is,
to put it another way, that the rules of such a universal grammar
would constitute a kind of projective description of important
aspects of the human mind."%1 There is nothing here demanding
that the universal grammar take into account the young man's desire
for love. As far as I can see, Chomsky is just as much a rationalist
as we artificial intelligentsia.
Chomsky's goal of a universal grammar and Schank's goal of a
conceptual base are similar, except that Schank's ideas are further
developed, and the performance of his students' programs can be
compared with reality. I think they will require drastic revision
and may not be on the right track at all, but then I am pursuing a
rather different line of research concerning how to represent the
basic facts that an intelligent being must know about the world. My
idea is to start from epistemology rather than from language,
regarding their linguistic representation as secondary. This approach
has proved difficult, has attracted few practitioners, and has led to
few computer programs, but I still think it's right.
Weizenbaum approves of the Chomsky school's haughty attitude
towards Schank, Winograd and other AI based language researchers. On
page 184, he states, %2"many linguists, for example, Noam Chomsky,
believe that enough thinking about language remains to be done to
occupy them usefully for yet a little while, and that any effort to
convert their present theories into computer models would, if
attempted by the people best qualified, be a diversion from the main
task. And they rightly see no point to spending any of their
energies studying the work of the hackers."%1
This brings the chapter on "compulsive computer programmers" alias
"hackers" into a sharper focus. Chomsky's latest book
%2Reflections on Language%1 makes no reference to the work of
Winograd, Schank, Charniak, Wilks, Bobrow or William Woods to name
only a few of those who have developed large computer systems that
work with natural language and who write papers on the semantics of
natural language. The actual young computer programmers who call
themselves hackers and who come closest to meeting Weizenbaum's
description don't write papers on natural language. Therefore, it
would seem that the hackers whose work need not be studied are
Winograd, Schank, et al., who are professors and senior scientists.
The Chomsky school may be embarrassed by the fact that it has only
recently arrived at the conclusion that the semantics of natural
language is more fundamental than its syntax, while AI based
researchers have been pursuing this line for fifteen years.
The outside observer should be aware that to some extent this
is a pillow fight within M.I.T. Chomsky and Halle are not to be
dislodged from M.I.T. and neither is Minsky whose students have
pioneered the AI approach to natural language. Schank is quite
secure at Yale. Weizenbaum also has tenure. However, some assistant
professorships may be at stake, especially at M.I.T.
Allen Newell and Herbert Simon are criticized for being
overoptimistic and are considered morally defective for attempting to
describe humans as difference-reducing machines. Simon's view that
the human is a simple system in a complex environment is singled out
for attack. In my opinion, they were overoptimistic, because their
GPS model on which they put their bets wasn't good enough. Maybe
Newell's current %2production system models%1 will work out better.
As to whether human mental structure will eventually turn out to be
simple, I vacillate.
I regard Forrester's models as incapable of taking into
account qualitative changes, and the world models they have built as
defective even in their own terms, because they leave out
saturation-of-demand effects that cannot be discovered by curve-fitting
as long as a system is rate-of-expansion limited. Moreover, I don't
accept his claim that his models are better suited than the unaided
mind in "interpreting how social systems behave", but Weizenbaum's
sarcasm on page 246 is unconvincing. He quotes Forrester,
"[desirable modes of behavior of the social system] %2seem to be
possible only if we have a good understanding of the system dynamics
and are willing to endure the self-discipline and pressures that must
accompany the desirable mode"%1. Weizenbaum comments, %2"There is
undoubtedly some interpretation of the words 'system' and 'dynamics'
which would lend a benign meaning to this observation"%1. Sorry, but
it looks ok to me provided one is suitably critical of Forrester's
proposed social goals and the possibility of making the necessary
assumptions and putting them into his models.
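The point about curve-fitting deserves a worked example, so here is
one of my own with invented numbers: data generated by logistic
growth toward a carrying capacity K, fitted while the system is
still rate-of-expansion limited.
.begin verbatim
# While growth is still far from saturation, logistic data are fitted
# almost perfectly by a pure exponential, so the fit alone cannot
# reveal the coming saturation level K.
import math

K, r = 1000.0, 0.1                 # assumed capacity and growth rate
def logistic(t, x0=1.0):
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

early = [(t, logistic(t)) for t in range(30)]   # pre-saturation data

# least-squares fit of log x = log a + b*t, i.e. x ~ a * exp(b*t)
n   = len(early)
st  = sum(t for t, _ in early)
sy  = sum(math.log(x) for _, x in early)
stt = sum(t * t for t, _ in early)
sty = sum(t * math.log(x) for t, x in early)
b = (n * sty - st * sy) / (n * stt - st * st)
a = math.exp((sy - b * st) / n)

for t in (0, 15, 29):
    print(t, round(logistic(t), 2), round(a * math.exp(b * t), 2))
# The exponential tracks the data closely yet says nothing about K.
.end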
Skinner's behaviorism that refuses to assign reality to
people's internal state seems wrong to me, but we can't call him
immoral for trying to convince us of what he thinks is true.
Weizenbaum quotes Edward Fredkin, former director of
Project MAC, and the late Warren McCulloch of M.I.T. without
giving their names (pp. 241 and 240). Perhaps he thinks a few
puzzles will make the book more interesting, and this is so.
Fredkin's plea for research in automatic programming seems to
overestimate the extent to which our society currently relies
on computers for decisions. It also overestimates the ability
of the faculty of a particular university to control the uses
to which technology will be put, and it underestimates the
difficulty of making knowledge based systems of practical use.
Weizenbaum is correct in pointing out that Fredkin doesn't
mention the existence of genuine conflicts in society, but
only the new left sloganeering elsewhere in the book gives a
hint as to what he thinks they are and how he proposes to
resolve them.
The quote from McCulloch is from an essay entitled "Toward
some circuitry of ethical robots ..." (a very long title), ACTA
BIOTHEORETICA XI, 147-156, 1956. (Minsky tells me "this is a brave
attempt to find a dignified sense of freedom within the psychological
determinism morass.")
I think this can be done better now, but Weizenbaum implies that
McCulloch's 1956 effort is to his moral discredit.
Finally, Weizenbaum attributes to me two statements - both
from oral presentations - which I cannot verify. One of them is
%2"The only reason we have not yet succeeded in simulating every
aspect of the real world is that we have been lacking a sufficiently
powerful logical calculus. I am working on that problem."%1 This
statement doesn't express my present opinion or my opinion in 1973
when I am alleged to have expressed it in a debate, and no-one has
been able to find it in the video-tape of the debate.
We can't simulate "every aspect of the real world", because
the initial state information is never available, the laws of motion
are imperfectly known, and the calculations for a simulation are too
extensive. Moreover, simulation wouldn't necessarily answer our
questions. Instead, we must find out how to represent in the memory
of a computer the information about the real world that is actually
available to a machine or organism with given sensory capability, and
also how to represent a means of drawing those useful conclusions
about the effects of courses of action that can be correctly inferred
from the attainable information. Having a %2sufficiently powerful
logical calculus%1 is an important part of this problem - but one of
the easier parts.
.cb A SUMMARY OF POLEMICAL SINS
The speculative sections of the book contain numerous dubious
little theories, such as this one about the dehumanizing effect of
the invention of the clock: %2"The clock had created literally a new
reality; and that is what I meant when I said earlier that the trick
man turned that prepared the scene for the rise of modern science was
nothing less than the transformation of nature and of his perception
of reality. It is important to realize that this newly created
reality was and remains an impoverished version of the older one, for
it rests on a rejection of those direct experiences that formed the
basis for, and indeed constituted the old reality. The feeling of
hunger was rejected as a stimulus for eating; instead one ate when an
abstract model had achieved a certain state, i.e. when the hand of a
clock pointed to certain marks on the clock's face (the
anthropomorphism here is highly significant too), and similarly for
signals for sleep and rising, and so on."%1
This idealization of primitive life is simply thoughtless.
Like modern man, primitive man ate when the food was ready, and
primitive man probably had to start preparing it even further in
advance. Like modern man, primitive man lived in families whose
members were no more likely to become hungry all at once than are the
members of a present-day family.
I get the feeling that in toppling this microtheory I am not
playing the game; the theory is intended only to provide an
atmosphere, and like the reader of a novel, I am supposed to suspend
disbelief. But the contention that science has driven us from a
psychological Garden of Eden depends heavily on such word pictures.
By the way, I recall from my last sabbatical at M.I.T. that
the %2feeling of hunger%1 is more often the %2direct social stimulus
for eating%1 for the "hackers" deplored in Chapter 4 than it could
have been for primitive man. Oft on a crisp New England winter
night, even as the clock struck three, have I heard them call to one
another, messages flashed on the screens, a flock of hackers
magically gathered, and the whole picturesque assembly rushed
chattering off to Chinatown.
.item←0;
I find the book substandard as a piece of polemical writing
in the following respects:
#. The author has failed to work out for himself definite
positions on the issues he discusses. Making an extreme statement in
one place and a contradictory statement in another is no substitute
for trying to take all the factors into account and reach a
considered position. Unsuspicious readers can come away with a great
variety of views, and the book can be used to support contradictory
positions.
#. The computer linguists - Winograd, Schank, et al. - are
denigrated as hackers and compulsive computer programmers by
innuendo.
#. One would like to know more precisely what biological and
psychological experiments and computer applications he finds
acceptable. Reviewers have already drawn a variety of conclusions on
this point.
#. The terms "authentic", "obscene", and "dehumanization are
used as clubs. This is what mathematicians call "proof by
intimidation".
#. The book encourages a snobbery that has no need to argue
for its point of view but merely utters code words, on hearing which
the audience is supposed to applaud or hiss as the case may be. The
%2New Scientist%1 reviewer certainly salivates in most of the
intended places.
#. Finally, when moralizing is both vehement and vague, it
invites authoritarian abuse either by existing authority or by new
political movements. Imagine, if you can, that this book were the
bible of some bureaucracy, e.g. the Office of Technology Assessment,
that acquired power over the computing or scientific activities of a
university, state, or country. Suppose Weizenbaum's slogans were
combined with %2the bureaucratic ethic%1 that holds that any problem
can be solved by a law forbidding something and a bureaucracy of
eager young lawyers to enforce it. Postulate further a vague
%2Humane Research Act%1 and a "public interest" organization with
more eager young lawyers suing to get judges to legislate new
interpretations of the Act. One can see a laboratory needing more
lawyers than scientists and a Humane Research Administrator capable
of forbidding or requiring almost anything.
I see no evidence that Weizenbaum foresees his work being used
in this way; he doesn't use the phrase %2laisser innover%1 which is
the would-be science bureaucrat's analogue of the economist's
%2laisser faire%1, and he never uses the indefinite phrase "it should
be decided" which is a common expression of the bureaucratic ethic.
However, he has certainly given his fellow computer scientists at
least some reason to worry about potential tyranny.
By the way, like the curate's egg, the book is good in parts.
.CB WHAT WORRIES ABOUT COMPUTERS ARE WARRANTED?
Grumbling about Weizenbaum's mistakes and moralizing
is not enough. Genuine worries prompted the book, and many
people share them. Here are the genuine concerns that I can
identify and the opinions of one computer scientist about their
resolution: What is the danger that the computer will lead to a false
model of man? What is the danger that computers will be misused?
Can human-level artificial intelligence be achieved? What, if any,
motivational characteristics will it have? Would the achievement of
artificial intelligence be good or bad for humanity?
.item←0;
.bb "#. Does the computer model lead to a false model of man."
Historically, the mechanistic model of the world,
life included, followed animistic models. Replacing them by
mechanistic models replaced shamanism by medicine. The
pre-computer mechanistic models of the mind were, in my opinion,
unsuccessful. I think the psychologists pursuing computational
models of mental processes may eventually develop a really
beneficial psychiatry.
Philosophical and moral thinking has never found a model
of man that relates human beliefs and purposes to the physical
world in a plausible way. Some of the unsuccessful attempts have
been more mechanistic than others. Both mechanistic and non-mechanistic
models have led to great harm when made the basis of political
ideology, because they have allowed tortuous reasoning to justify
actions that simple human intuition regards as immoral. In my
opinion, the relation of beliefs, purposes and wants to the
physical world is a complicated but ultimately solvable problem.
Computer models can help solve it, and can provide criteria
that will enable us to reject false solutions. The latter is
more important for now, and
computer models are already hastening the decay of dialectical
materialism in the Soviet Union.
.bb "#. What is the danger that computers will be misused?"
Up to now, computers have been just another labor-saving
technology. I don't agree with Weizenbaum's acceptance of
the claim that our society would have been inundated by paper
work without computers. Without computers, people would work
a little harder and get a little less for their work. However,
when home terminals become available, social changes of the
magnitude of those produced by the telephone and automobile
will occur. I have discussed them elsewhere, and I think they
will be good - as were the changes produced by the automobile
and the telephone. Tyranny comes from control of the police
coupled with a tyrannical ideology; data banks will be a minor
convenience. No dictatorship yet has been overthrown for
lack of a data bank.
One's estimate of whether technology will work out well
in the future is correlated with one's view of how it worked out
in the past. I think it has worked out well and am optimistic
about the future. I feel that much current ideology is a
combination of older anti-scientific and anti-technological
views with new developments in the political technology of
instigating and manipulating guilt feelings.
.bb "#. What motivations will artificial intelligence have?"
It will have what motivations we choose to give it.
Those who finally create it should start by motivating
it only to answer questions and should have the sense to ask
for full pictures of the consequences of alternate actions
rather than simply how to achieve a fixed goal, ignoring
possible side-effects.
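A schematic sketch - my construction, not anyone's actual design -
may clarify the advisory stance intended here: the machine reports
the predicted consequences of each alternative, and a human chooses.
.begin verbatim
# The program answers questions about consequences; it never commits
# itself to a fixed goal.  The world model and actions are invented.
def predict_consequences(world, action):
    return world["effects"].get(action, ["unknown side-effects"])

def advise(world, alternatives):
    return {a: predict_consequences(world, a) for a in alternatives}

world = {"effects": {"build-dam":  ["power", "flooded valley"],
                     "do-nothing": ["status quo"]}}

for action, effects in advise(world, ["build-dam", "do-nothing"]).items():
    print(action, "->", effects)   # the human decides; this only reports
.end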
.bb "#. Will artificial intelligence be good or bad?"
Here we are talking about machines with the same range of
intellectual abilities as are possessed by humans. However, the
science fiction vision of robots with almost precisely the ability of
a human is quite unlikely, because the next generation of computers
or even hooking computers together would produce an intelligence that
might be qualitatively like that of a human, but thousands of times
faster. What would it be like to be able to put a hundred years'
thought into every decision? I think it is impossible to say whether
qualitatively better answers would be obtained; we will have to try
it and see.
The achievement of above-human-level artificial intelligence
will open to humanity an incredible variety of options. We cannot
now fully envisage what these options will be, but it seems apparent
that one of the first uses of high-level artificial intelligence will
be to determine the consequences of alternate policies governing its
use. I think the most likely variant is that man will use artificial
intelligence to transform himself, but once its properties and
the consequences of its use are known, we may decide not to use it.
Science would then be a sport like mountain climbing; the point will
be to discover the facts about the world using some stylized limited
means. I wouldn't like that, but the man of that time will find our
opinion as relevant to him as we would find the opinion of
⊗Pithecanthropus about whether subsequent evolution took the right
course.
All these questions merit and have gotten more extensive
discussion, but I think the only rational policy now is to expect the
people confronted by the problem to understand their best interests
better than we now can. Even if full AI were to arrive next year,
this would be right. Correct decisions will require an intense
effort that cannot be mobilized to consider an eventuality that is
still remote. Imagine trying to persuade the presidential candidates
to debate on TV what each of them would do about each of the forms
that full AI might take.
%7This review is filed as WEIZEN.REV[PUB,JMC] at SU-AI on the ARPA net.
Any comments sent to JMC@SU-AI will be stored in the directory PUB,JMC
also known as %2McCarthy's Electric Magazine%7. The comment
files will be designated WEIZEN.1, WEIZEN.2, etc.%1
.begin verbatim
John McCarthy
Artificial Intelligence Laboratory
Stanford, California 94305
.end
{date}